

posted by hubie on Saturday March 14, @06:06PM   Printer-friendly

https://www.slashgear.com/2119616/americans-diy-solar-panels-avoid-high-electric-bill/

Homeowners and renters alike have seen utility costs increase in the mid-2020s, directly impacting their monthly budgets. This is due to a range of factors, from some providers having virtual monopolies in specific areas to attempts to offset the costs of power-hungry AI data centers. Even common mistakes can run up utility costs. To that end, some have turned to homemade solar setups to generate their own energy and save money — all without letting their utility providers know, avoiding potentially costly connection fees on their already rising bills.

These small, relatively simple setups are known as balcony or plug-in solar: two to four panels that can be placed in an outdoor area, like a balcony, and plugged into a standard wall outlet to feed the harvested solar energy into the home. These setups are easy to hook up, much simpler than a full-on solar roof installation, and save money on utility bills — alongside the usual environmental benefits of solar panels.

Reports by outlets such as Canary Media, The Washington Post, and CNN indicate that savings can range from around $100 per year to $35 to $50 per month. Exact numbers depend on elements like location, existing utility rates, and the size and strength of the solar setup. While DIY solar does seem like a good idea, there's also the legal side of it to be aware of. Unfortunately, laws across the United States are a bit hazy on the matter, but that could change soon.

[...] At the time of publication, the only state with solid DIY solar laws on the books is Utah. In 2025, the state passed House Bill 340, or the Solar Power Amendments bill, which approves and encourages the use of small solar setups in residential settings. These devices and their owners are exempt from needing approval and from utility provider fees. While many states haven't even introduced potential legislation on this front, others, such as New York and California, have taken steps to make balcony solar legal and offer protections to those hoping to equip their homes with it.

Depending on size, plug-in solar setups can cost between a few hundred dollars and over $1,000, offering between 200 and 800 watts of power. This means that users should have no problem running lights, device chargers, radios, and fans, among other small devices, from their balcony solar setups. Some may even be able to run appliances like refrigerators.
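As a rough sanity check on those figures, here is a back-of-the-envelope payback estimate. The sun-hours and electricity-price inputs are illustrative assumptions of ours, not numbers from the article; both vary widely by location:

```python
# Rough payback estimate for a plug-in ("balcony") solar setup.
# peak_sun_hours and price_per_kwh are illustrative assumptions,
# not figures from the article.

def annual_savings(rated_watts, peak_sun_hours=4.0, price_per_kwh=0.15):
    """Estimated yearly savings in dollars for a setup of the given wattage."""
    kwh_per_year = rated_watts / 1000 * peak_sun_hours * 365
    return kwh_per_year * price_per_kwh

def payback_years(cost, rated_watts, **kwargs):
    """Years for the setup to pay for itself at the estimated savings rate."""
    return cost / annual_savings(rated_watts, **kwargs)

for watts, cost in [(200, 300), (800, 1000)]:
    print(f"{watts} W / ${cost}: ~${annual_savings(watts):.0f}/yr, "
          f"payback ~{payback_years(cost, watts):.1f} yr")
```

Under these assumptions the results land in the same range the reports cite: roughly $40 to $175 per year, with payback in five to seven years.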

[...] While customer complaints indicate that some solar companies are worth avoiding, there are many more vying for the business of homeowners and renters. Given advancing legislation supporting balcony solar and increasing energy costs spurring its adoption, it stands to reason that some of these companies' offerings will soon become more popular with Americans.


Original Submission

posted by hubie on Saturday March 14, @01:19PM   Printer-friendly
from the who-needs-Photoshop-when-you-have-AI? dept.

Troubled SaaS icon Adobe tumbled after hours, its stock falling to seven-year lows, after the company announced that CEO Shantanu Narayen will resign from the creative software giant amid deep skepticism about the company's ability to survive and thrive in the AI era:

The CEO change "adds questions around strategic continuity, capital allocation priorities, and pace of innovation," Grace Harmon, an analyst at Emarketer, said in an email. "Investors will likely focus on whether incoming leadership maintains a balance between disciplined execution and aggressive AI investment, especially as competition in creative and enterprise AI intensifies."

The company also gave a sales forecast for the current quarter that just topped estimates, but failed to ease investor fears that the software maker is being left behind by new competitors.

In the fiscal first quarter, revenue increased 12% to $6.4 billion, compared with analysts' average estimate of $6.28 billion. Adjusted earnings were $6.06 a share in the period, which ended Feb. 27. The average projection was $5.88 a share.

Annual recurring revenue for the company's AI-first products such as Firefly more than tripled compared to the same period last year, Narayen said in a script prepared for a conference call scheduled after the results. In September, Adobe said sales from these products exceeded $250 million.

[...] The shares fell about 6% in extended trading after closing at $269.78 in New York. The stock has declined about 23% this year, and is poised to drop to its lowest level since 2019.


Original Submission

posted by hubie on Saturday March 14, @08:38AM   Printer-friendly

Most of the devices are made by Asus and are located in the US:

Researchers say they have uncovered a takedown-resistant botnet of 14,000 routers and other network devices—primarily made by Asus—that have been conscripted into a proxy network that anonymously carries traffic used for cybercrime.

The malware—dubbed KadNap—takes hold by exploiting vulnerabilities that have gone unpatched by their owners, Chris Formosa, a researcher at security firm Lumen's Black Lotus Labs, told Ars. The high concentration of Asus routers is likely due to botnet operators acquiring a reliable exploit for vulnerabilities affecting those models. He said it's unlikely that the attackers are using any zero-days in the operation.

The number of infected routers averages about 14,000 per day, up from 10,000 last August, when Black Lotus discovered the botnet. Compromised devices are overwhelmingly located in the US, with smaller populations in Taiwan, Hong Kong, and Russia. One of the most salient features of KadNap is a sophisticated peer-to-peer design based on Kademlia, a network structure that uses distributed hash tables to conceal the IP addresses of command-and-control servers. The design makes the botnet resistant to detection and takedowns through traditional methods.

"The KadNap botnet stands out among others that support anonymous proxies in its use of a peer-to-peer network for decentralized control," Formosa and fellow Black Lotus researcher Steve Rudd wrote Wednesday. "Their intention is clear: avoid detection and make it difficult for defenders to protect against."

[...] Kademlia uses a 160-bit space to designate (1) keys—which are unique bitstrings derived by hashing a chunk of data—and (2) node IDs, both of which are assigned to each node. Nodes then store the keys of other nodes. The stored keys are organized by their similarity to the ID of the node storing them. Proximity is measured by XOR distance, a mathematical means of mapping a network. When a node polls another node, it uses this metric to locate other nodes with the closest distance to the key it's looking for until it finally finds a match. KadNap, a variant of Kademlia, obtains the key to be searched through a BitTorrent node.

Distributed hash tables [DHT] help you get closer and closer to a target. You first reach out to some entry BitTorrent nodes and basically say, "hey, I have this secret passphrase. I'm looking for who to give it to." So you give it to a couple of nearby "neighbors," and they say, "ah, ok, I don't fully understand this passphrase, but it's kind of familiar, and here are some people who may know what that means." So now you go to those neighbors and the process continues. Eventually you reach someone who says, "Yes! This is my passphrase, welcome in." In our case, when we reach this person they say here is a file to firewall port 22 and then here is a second file containing the C2 address you want to connect to.
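The XOR-distance lookup described above can be sketched in a few lines of Python. This is a simplified illustration, not KadNap's actual code: real Kademlia maintains k-bucket routing tables and queries peers iteratively over the network, whereas here a flat list of IDs stands in for the known nodes:

```python
import hashlib

# Minimal sketch of a Kademlia-style XOR-distance lookup.
# Kademlia derives 160-bit keys and node IDs by hashing; nodes find a key
# by repeatedly asking the closest known nodes for nodes even closer to it.

def node_id(name: str) -> int:
    """Derive a 160-bit ID by hashing, as Kademlia does for keys and nodes."""
    return int.from_bytes(hashlib.sha1(name.encode()).digest(), "big")

def xor_distance(a: int, b: int) -> int:
    """The Kademlia distance metric: bitwise XOR, compared as an integer."""
    return a ^ b

def closest_nodes(target: int, nodes: list[int], k: int = 3) -> list[int]:
    """Return the k known nodes nearest the target key in XOR space."""
    return sorted(nodes, key=lambda n: xor_distance(n, target))[:k]

# Each lookup step narrows the search to nodes nearer the key,
# converging in O(log n) steps without any central index.
nodes = [node_id(f"peer-{i}") for i in range(100)]
key = node_id("secret-passphrase")
print([hex(n)[:10] for n in closest_nodes(key, nodes)])
```

Because lookups converge on a key rather than an address, no single node needs to know where the command-and-control servers are, which is what makes the design hard to take down.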

Despite the resistance to normal takedown methods, Black Lotus says it has devised a means to block all network traffic to or from the control infrastructure. The lab is also distributing the indicators of compromise to public feeds to help other parties block access.

[...] People who are concerned their devices are infected can check this page for IP addresses and a file hash found in device logs. To disinfect devices, they must be factory reset. Because KadNap stores a shell script that runs when an infected router reboots, simply restarting the device will result in it being compromised all over again. Device owners should also ensure all available firmware updates have been installed, that administrative passwords are strong, and that remote access has been disabled unless needed.


Original Submission

posted by hubie on Saturday March 14, @03:51AM   Printer-friendly
from the engineer-inside dept.

Craig H. Barratt's appointment should settle a governance issue over Intel's Foundry future:

Intel made the announcement on March 3, clarifying that Yeary will not stand for reelection in May. It's the most consequential governance change at Intel since the board’s forced exit of former CEO Pat Gelsinger in late 2024. It also consolidates authority around Barratt, who will lead the board’s focus on scaling U.S. R&D and manufacturing, and CEO Lip-Bu Tan, who has publicly supported Intel Foundry since taking the role.

[...] But some industry analysts have reported that Yeary and Tan were not aligned on keeping Intel Foundry. In a post to X on August 7, Citrini Research analyst Jukan cited insiders who claimed that Yeary drafted a plan to spin off Intel Foundry as an independent entity, bring in minority investment from companies including Nvidia and Amazon, and effectively step back from contract manufacturing as a core business.

[...] What’s confirmed is that Yeary is leaving, and the new chair has publicly stated that his focus will be on supporting “rigorous execution” in investing and scaling U.S. R&D and manufacturing.

Intel launched Panther Lake (Core Ultra Series 3) on 18A at CES in January, with consumer systems shipping later that month, making it the first commercial platform built on the node. 18A, which uses RibbonFET gate-all-around transistors and PowerVia backside power delivery, is the most advanced process ever manufactured in the United States.

Yields are sufficient to support Panther Lake shipments, but Intel CFO David Zinsner said in October that they are not yet high enough to deliver normal profit margins, with industry-standard yield results not expected until 2027. It’s understood that Intel doesn’t plan to add significant 18A capacity in 2026 beyond current commitments.

In Barratt, Intel will gain a chair who has publicly committed to scaling U.S. manufacturing, who operates inside a semiconductor ecosystem that depends on foundry diversity, and who has worked inside Intel's own infrastructure business. That’ll obviously matter to potential foundry customers doing long-term planning who need confidence that Intel Foundry will still exist and be resourced in five years.

"The company has taken significant steps to strengthen its financial position, advance its technology and product roadmap, and enhance operational discipline," said Barratt. "The board thanks Frank for his leadership and for helping position Intel for this next phase."

Intel's board will also shrink from 12 to 11 members following the May meeting, as Yeary's seat will not be filled. Given that four new directors with technology operating backgrounds have joined the board since 2024, its overall composition is moving in a consistent direction away from purely financial oversight and towards technical expertise.


Original Submission

posted by janrinok on Friday March 13, @11:02PM   Printer-friendly

https://arstechnica.com/ai/2026/03/after-outages-amazon-to-make-senior-engineers-sign-off-on-ai-assisted-changes/

Amazon's ecommerce business has summoned a large group of engineers to a meeting on Tuesday for a "deep dive" into a spate of outages, including incidents tied to the use of AI coding tools.

The online retail giant said there had been a "trend of incidents" in recent months, characterized by a "high blast radius" and "Gen-AI assisted changes" among other factors, according to a briefing note for the meeting seen by the FT.

Under "contributing factors" the note included "novel GenAI usage for which best practices and safeguards are not yet fully established."

"Folks, as you likely know, the availability of the site and related infrastructure has not been good recently," Dave Treadwell, a senior vice-president at the group, told employees in an email, also seen by the FT.

The note ahead of Tuesday's meeting did not specify which particular incidents the group planned to discuss.

Amazon's website and shopping app went down for nearly six hours this month in an incident the company said involved an erroneous "software code deployment." The outage left customers unable to complete transactions or access functions such as checking account details and product prices.

Treadwell, a former Microsoft engineering executive, told employees that Amazon would focus its weekly "This Week in Stores Tech" (TWiST) meeting on a "deep dive into some of the issues that got us here as well as some short immediate term initiatives" the group hopes will limit future outages.

He asked staff to attend the meeting, which is normally optional.

Junior and mid-level engineers will now require more senior engineers to sign off on any AI-assisted changes, Treadwell added.

Amazon said the review of website availability was "part of normal business" and it aims for continual improvement.

"TWiST is our regular weekly operations meeting with a specific group of retail technology leaders and teams where we review operational performance across our store," the company said.

Separately, the company's cloud computing arm—Amazon Web Services—has suffered at least two incidents linked to the use of AI coding assistants, which the company has been actively rolling out to its staff.

AWS suffered a 13-hour interruption to a cost calculator used by customers in mid-December after engineers allowed the group's Kiro AI coding tool to make certain changes, and the AI tool opted to "delete and recreate the environment," the FT previously reported.

Amazon previously said the incident in December was an "extremely limited event" affecting only a single service in parts of mainland China. Amazon added that the second incident did not have an impact on a "customer facing AWS service."

The FT previously reported multiple Amazon engineers said their business units had to deal with a higher number of "Sev2s"—incidents requiring a rapid response to avoid product outages—each day as a result of job cuts.

Amazon has undertaken multiple rounds of lay-offs in recent years, most recently eliminating 16,000 corporate roles in January. The group has disputed the claim that headcount cuts were responsible for an increase in recent outages.


Original Submission

posted by janrinok on Friday March 13, @06:19PM   Printer-friendly

According to Tech Review, a number of different "dashboard" websites have popped up recently, displaying a variety of data sources related to the current war in Iran. They are meant to mimic the sorts of dashboard displays used by governments...but these have been "vibe coded" and don't have traceable sources of data. As well as being entertaining, it seems that these are often tied to gambling on various war-related topics. Story at https://www.technologyreview.com/2026/03/09/1134063/how-ai-is-turning-the-iran-conflict-into-theater/ or if paywalled, try https://archive.ph/G0zut

AI-enabled dashboards, combined with prediction markets and fake imagery, are reshaping how war is observed.
[...]

As a journalist, I believe these sorts of intelligence tools have a lot of promise. While many of us know that real-time data on shipping routes or power outages exist, it's a powerful thing to actually see it all assembled in one place (though using it to watch a war unfold while you munch on popcorn and place bets turns the war into perverse entertainment). But there are real reasons to think that these sorts of raw data feeds are not as informative as they may feel.

Craig Silverman, a digital investigations expert who teaches investigative techniques, has been keeping a log of these dashboards (he's up to 20). "The concern," he says, "is there's an illusion of being on top of things and being in control, where all you're really doing is just pulling in a ton of signals and not necessarily understanding what you're seeing, or being able to pull out true insights from it." [Silverman page, debunking many things, https://x.com/CraigSilverman ]

One problem has to do with the quality of the information. Many dashboards feature "intel feeds" with AI-generated summaries of complex, ever-changing news events. These can introduce inaccuracies. By design, the data is not especially curated. Instead, the feeds just display everything at once, with a map of strike locations in Iran next to the prices of obscure cryptocurrencies.

Anyone here tried one of these sites? I suspect you need a big monitor...


Original Submission

posted by janrinok on Friday March 13, @01:37PM   Printer-friendly

https://www.thewave.engineer/articles.html/productivity/legos-0002mm-specification-and-its-implications-for-manufacturing-r120/

A 2x4 LEGO brick manufactured in 1958 will snap perfectly onto a brick molded this morning in Denmark, China, Hungary, Mexico, or the Czech Republic. The 66-year-old brick will have the exact same interference fit, the same clutch power, the same 4.8mm stud diameter. This is the result of maintaining mold tolerances to 0.01mm (10 microns) across billions of parts annually.

For hardware engineers developing products with tight-fit mechanical interfaces, LEGO represents an extreme case study in what's possible when you can't compromise on dimensional consistency. A brick that's 0.02mm oversize won't fit into existing structures. A brick that's 0.02mm undersize falls apart when you pick it up. There is no acceptable tolerance range for functional failure. This creates engineering constraints that most consumer products never face. Understanding how LEGO achieves this - and more importantly, where they make deliberate trade-offs - provides practical frameworks for tolerance analysis, mold design, and manufacturing process control.

The frequently cited "0.002mm tolerance" is misleading without context. LEGO's actual mold precision is 10 microns, but different features have different critical tolerances. The cylindrical studs on top are 4.8mm in diameter with a tolerance of ±0.01mm. The hollow tubes underneath create the interference fit that makes bricks stick together. Standard bricks are 9.6mm tall, and three plates stack to exactly one brick height. The cumulative tolerance across a stack of 100 bricks determines whether a tall structure maintains dimensional accuracy.
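The cumulative-tolerance point lends itself to a quick stack-up calculation. The sketch below compares worst-case and statistical (root-sum-square) stack-up for a 100-brick tower, using the ±0.01 mm per-part figure above; the independent-random-deviation assumption behind RSS is our own, not LEGO's:

```python
import math

# Tolerance stack-up for a tower of stacked bricks. Worst-case assumes every
# brick errs to the same limit; RSS (root-sum-square) assumes independent,
# random deviations, which is the usual statistical stack-up model.

BRICK_HEIGHT_MM = 9.6
TOL_MM = 0.01  # +/- per brick, per the article

def worst_case(n: int) -> float:
    """Maximum possible deviation if all n bricks err the same way."""
    return n * TOL_MM

def rss(n: int) -> float:
    """Statistical (RSS) deviation for n independent tolerances."""
    return math.sqrt(n) * TOL_MM

n = 100
print(f"nominal height: {n * BRICK_HEIGHT_MM:.1f} mm")
print(f"worst-case:    +/-{worst_case(n):.2f} mm")
print(f"RSS estimate:  +/-{rss(n):.2f} mm")
```

Even in the worst case, a 100-brick (960 mm) tower drifts by at most 1 mm; statistically, closer to 0.1 mm, which is why tall LEGO structures stay square.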


Original Submission

posted by janrinok on Friday March 13, @08:52AM   Printer-friendly
from the rocks-economy dept.

(15 Dec 2025) Fortescue Infinity Train gets 14.5 MWh battery that never needs charging [update]
The loaded train runs downhill and recharges the 14.5 MWh battery via "regenerative braking," with enough energy to haul the empty train cars back uphill - it didn't quite fully work.

(15 Feb 2026) Fortescue trials battery-electric locomotives in Pilbara as decarbonisation race tightens

Fortescue launched two battery-electric locomotives this week, rounding out its fleet of 70 diesel-powered machines hauling precious iron-ore from pit to port.
...

The locomotive's battery is the equivalent of "200 to 300 average electric vehicles" and capable of powering a refrigerator for 30 years, according to Mr Otranto.
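Those comparisons roughly check out. A quick sanity check, with the per-EV and per-refrigerator figures being typical assumptions of ours rather than Fortescue's:

```python
# Sanity check on the quoted comparisons. The EV pack size and annual
# refrigerator consumption below are assumed typical values, not from
# the article.

battery_kwh = 14_500          # 14.5 MWh locomotive battery
ev_kwh = 60                   # assumed average EV battery pack
fridge_kwh_per_year = 500     # assumed annual fridge consumption

print(f"~{battery_kwh / ev_kwh:.0f} EV batteries")                      # ~242
print(f"~{battery_kwh / fridge_kwh_per_year:.0f} years of fridge use")  # ~29
```

Both results sit inside the quoted "200 to 300" EVs and "30 years" of refrigeration.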

...
The locomotives, purpose-built by Caterpillar subsidiary Progress Rail, boast what Fortescue has called the world's largest land-mobile battery, with a capacity of 14.5 megawatt-hours.

The pair will save the company 1 million litres of diesel each year, still just a fraction of the 80 million litres the company consumes annually.

"It is a large undertaking: these take probably a couple of years to manufacture, so once we pull an order, you can see it will take a couple of years to transition the entire fleet," Mr Otranto said.

The company hopes to complete the transition ahead of its "real-zero" deadline of 2030.
...
The locomotives' massive battery will be charged in two ways.

The first is via Fortescue's growing renewable energy apparatus, which it says it is expanding aggressively at a rate of more than 3,000 solar panels a day.

The second charging method is through regenerative braking, a mechanism drawing closely from the company's stalled "Infinity Train" concept.

Andrew Forrest had previously touted an in-house electric rail model, developed with Australian engineering firm Downer Group, that generated all the power it needed using the uphill-downhill dynamics of the Pilbara ranges.

The project was canned, however, in September, axing more than 100 staff.


Original Submission

posted by janrinok on Friday March 13, @04:09AM   Printer-friendly

https://arstechnica.com/science/2026/03/ig-nobels-ceremony-moves-to-europe-over-security-concerns/

Every year, we have a blast covering a fresh crop of winners of the Ig Nobel prizes. After 35 years in Boston, the annual prize ceremony will take place in Zurich, Switzerland, this year and will continue to be held in a European city for the foreseeable future. The reason: concerns about the safety of international travelers, who are increasingly reluctant to travel to the US to participate.

"During the past year, it has become unsafe for our guests to visit the country," Marc Abrahams, master of ceremonies and editor of The Annals of Improbable Research magazine, told The Associated Press. "We cannot in good conscience ask the new winners, or the international journalists who cover the event, to travel to the US this year."

Established in 1991, the Ig Nobels are a good-natured parody of the Nobel Prizes; they honor "achievements that first make people laugh and then make them think." As the motto implies, the research being honored might seem ridiculous at first glance, but that doesn't mean it's devoid of scientific merit. The unapologetically campy awards ceremony features miniature operas, scientific demos, and the 24/7 lectures, in which experts must explain their work twice: once in 24 seconds and again in just seven words.

Traditionally, the awards ceremony and related Ig Nobel events have taken place in Boston at Harvard University, Massachusetts Institute of Technology, and Boston University. However, four of last year's 10 winners opted to skip the ceremony rather than travel to the US, and the situation has not improved.

Nor is it just the Ig Nobels being affected by the hostile US environment for international travel. Many international gaming developers are choosing to skip this year's weeklong Game Developers Conference in San Francisco, citing similar concerns. "I honestly don't know anyone who is not from the US who is planning on going to the next GDC," Godot Foundation Executive Director Emilio Coppola, who's based in Spain, told Ars. "We never felt super safe, but now we are not willing to risk it."

So this year, the Ig Nobel organizers are joining forces with the ETH Domain and the University of Zurich for hosting duties. "Switzerland has nurtured many unexpected good things—Albert Einstein's physics, the world economy, and the cuckoo clock leap to mind—and is again helping the world appreciate improbable people and ideas," Abrahams said.

The Ig Nobels will not be returning to the US any time soon. Instead, the plan is for Zurich to host every second year; every odd-numbered year, the ceremony will be hosted by a different European city. Abrahams likened the arrangement to the Eurovision Song Contest.


Original Submission

posted by janrinok on Thursday March 12, @11:23PM   Printer-friendly

Two deeply rooted assumptions in global demographic debates are challenged: that fertility will rebound as societies develop, and that "replacement-level fertility" is an ideal to be pursued:

Drawing on the latest evidence, the authors show that neither view is supported by available data and argue that persistently low fertility can be sustainable and even economically desirable.

In their piece, published in Nature Human Behaviour, IIASA Distinguished Emeritus Research Scholar Wolfgang Lutz and IIASA Senior Researcher Guillaume Marois, who is also an associate professor at the Asian Demographic Research Institute of the Shanghai University, respond to political and public concern over declining birth rates in highly developed countries. While low fertility is increasingly framed as a crisis, associated with population ageing, labor shortages, and fiscal pressure, the authors argue that this narrative is based on outdated assumptions that no longer reflect current demographic realities.

A central motivation for the paper was the widespread belief based on earlier studies that fertility would recover as human development continues. However, using the most recent data up to 2023, the authors demonstrate that this pattern has reversed. Today, the global cross-sectional relationship is clearly negative: the higher a country's Human Development Index, the lower its fertility tends to be.

"This finding came as a surprise to much of the demographic community," says Marois. "Even countries once considered models for balancing work and family life, such as the Nordics, have experienced unexpectedly steep fertility declines. The idea that development alone will bring fertility back up simply doesn't hold anymore."

The commentary also questions the normative status of replacement-level fertility, often defined as 2.1 children per woman. This benchmark, the authors argue, is an artificial construct that only leads to long-term population stability under unrealistic assumptions, notably the absence of further mortality decline. More importantly, population stability does not automatically translate into economic or social wellbeing.

Instead, the authors emphasize that economic sustainability depends more on population structure than on population size. Higher levels of education, increased labor force participation, and rising productivity can offset – and even outweigh – the effects of having fewer births. Lower fertility can enable greater investment per child, strengthening human capital and innovation while reducing dependency burdens over the coming decades.

The policy implications are clear. While pro-natalist measures can improve family wellbeing, increasing fertility should not be their main objective, as their impact on fertility is typically modest and higher fertility does not necessarily improve economic wellbeing. Governments should instead adapt social security, labor market, and pension systems to the reality of sustained low fertility, while strengthening investments in education and productivity. This shift is particularly relevant for countries such as South Korea, China, and Japan, which currently record some of the world's lowest fertility rates and face especially intense political pressure to raise birth numbers.

"Our message is not that low fertility is inherently good or bad," concludes Lutz. "There is no single 'ideal' fertility level that guarantees prosperity. Instead of trying to push birth rates back to an arbitrary target, governments should focus on adapting social security systems to the changing demographic realities and invest strongly in education and productivity. Under those conditions, societies can thrive even with fewer births."

Journal Reference: Marois, G., Lutz, W. (2026). Low fertility may persist and could be good for the economy. Nature Human Behaviour DOI: https://www.nature.com/articles/s41562-026-02423-6


Original Submission

posted by janrinok on Thursday March 12, @06:39PM   Printer-friendly

https://www.tomshardware.com/tech-industry/artificial-intelligence/openais-massive-stargate-data-center-canceled-as-firm-cant-reach-terms-with-oracle-operator-struggles-with-reliability-issues-meta-said-to-be-interested-in-snatching-excess-capacity

Since mid-2025, Oracle, Crusoe, and OpenAI have discussed increasing data center power capacity from about 1.2 GW to roughly 2.0 GW, amid reluctance from locals. Negotiations got complicated due to difficult financing terms and OpenAI's shifting capacity forecasts, which led to their collapse, according to Bloomberg. Nonetheless, development of the 1,000-acre campus remains underway, and multiple facilities are already in service, though preliminary agreements to rent a substantial expansion were ultimately dropped. Could this be a signal that the Stargate project is faltering even as the AI industry as a whole is on the rise?

The Abilene campus remains one of the biggest AI data center projects announced so far, yet to date, it has been known primarily as a part of the widely publicized Stargate project. Oracle has been rapidly installing Nvidia-based servers used by OpenAI to train and deploy AI models and systems. However, relations between Oracle and Crusoe have been strained by reliability issues. Earlier this year, winter weather disrupted parts of the liquid-cooling infrastructure, forcing several buildings offline for multiple days. Both companies say cooperation remains strong and development continues swiftly, yet the source report clearly notes hiccups.

Given the rising tensions between the Stargate partners, Crusoe began searching for another tenant, according to Bloomberg. At that point, Nvidia reportedly stepped in to help ensure the site would continue deploying its hardware rather than systems powered by AMD. Furthermore, Nvidia provided Crusoe with a $150 million deposit and assisted efforts to attract Meta — which is not a part of the Stargate project — as a prospective tenant for the additional capacity, the report says. Meanwhile, Meta has yet to confirm its expansion at the Abilene campus.

Despite shelving the expansion of one Stargate project, Oracle's general partnership with OpenAI remains unchanged. In July last year, Oracle agreed to develop 4.5 GW of data center capacity for OpenAI, and that program continues. The companies have also announced projects in other locations, including a site near Detroit owned by Related Digital.

*One gigawatt is comparable to the output of a nuclear reactor and can supply electricity to hundreds of thousands of homes at peak usage. That being said, a nuclear power plant was not reported to be a part of negotiations, which perhaps explains why locals were against increasing power capacity using things like coal or gas generators, yet we are speculating here.


Original Submission

posted by janrinok on Thursday March 12, @01:57PM   Printer-friendly
from the every-move-you-make-every-step-you-take-I'll-be-watching-you dept.

When personalized ads get too intrusive, consumers are less likely to buy:

Years into the grand experiment of personalized digital marketing, most of us have had the experience: You search for a product — or just casually mention it. Suddenly, ads for that exact item stalk you across apps, websites, and social media. The targeting may be technically impressive, but it can feel unsettling.

That uneasy sentiment is the center of new research by Wayne Hoyer, professor of marketing and James L. Bayless/W.S. Farish Fund Chair for Free Enterprise at the McCombs School of Business at The University of Texas at Austin. He finds that when digital personalization crosses perceived boundaries, it triggers a powerful emotional response, which he calls "creepiness." That response can backfire on digital marketers by materially reducing consumers' willingness to buy.

The study, conducted by Hoyer and three marketing researchers from the University of Bern in Switzerland — Alisa Petrova, Lucia Malär, and Harley Krohmer — argues that creepiness is not a property of digital marketing itself. Instead, it is a structured emotional episode that unfolds inside the consumer in response to marketing.

The response has two parts: first, uncertainty about what is behind a marketing message; then, a judgment that it amounts to a threatening form of surveillance.

"When consumers are exposed to these ads, they make an assessment of ambiguity, such as, 'What is this?' and whether this is intrusive surveillance, such as, 'Are they watching me?'" Hoyer explains.

"If the answer is yes, this creates a negative emotion that can negatively affect purchase intentions."

[...] What can brands do to mitigate feelings of creepiness?

In a final experiment, the researchers tested a variety of remedies, such as transparency about data use, assurances of good intentions, offers of discounts, and charitable donations. They also tried including positive emotional images in ads: pictures of kittens.

Perhaps unsurprisingly, the kittens proved somewhat effective at de-creeping consumer reactions and softening damage to their plans to buy. Offering monetary compensation also helped.

Overall, however, even the best interventions made only limited improvements to purchase intentions. Hoyer says, "Creepiness is robust and difficult to mitigate once triggered."

This means that prevention is key, he says. It's more effective to avoid creating bad feelings in the first place than try to repair them after the fact.

"Managers should focus on prevention by designing personalization practices that minimize ambiguity and try to avoid signals of intrusive surveillance," Hoyer says.

The study suggests developing a Creepiness Level Index as a tool to help marketers track negative reactions to digital ads.

Over the long run, though, the marketing risk might diminish, he adds. "It is possible that creepiness will decline as consumers become more used to personalization and more accepting of AI technology."

Journal Reference: https://doi.org/10.1002/mar.70089


Original Submission

posted by hubie on Thursday March 12, @09:11AM   Printer-friendly

The US and Iran are trading blows in the Gulf with a simple drone that costs as little as $50,000 to make:

The Shahed 136 drone was invented by Iran and then copied by the US, but it was originally a Cold War weapon.

Iran invented the relatively simple Shahed 136 attack drone, but is now fending off US copies launched against it in combat. Why, when the US military has expensive, cutting-edge and hi-tech weapons, is it making flimsy drones powered by a motorbike engine?

Iranian company Shahed Aviation Industries originally designed the 136. It is 2.6 metres long and can carry 15-kilogram payloads over distances of about 2500 kilometres. It travels at a relatively modest speed of around 185 kilometres per hour – far slower than cruise missiles or bomb-carrying aircraft. But it has the advantage of extremely low cost – perhaps as low as $50,000 per unit.

Shaheds are now used in their hundreds in daily strikes on Ukraine by Russia, requiring layers of air defence – including fighter jets, machine guns, missiles and interceptor drones – to try to bring them down before they hit civilian or military targets. They are even in use by Houthi forces in Yemen.

Iran has been using Shahed drones, as well as a range of other hardware, in attacks around the Gulf this week in retaliation for US and Israeli strikes. In return, the US military has for the first time used its Low-cost Uncrewed Combat Attack System (LUCAS), a reverse-engineered copy of the Shahed 136 produced by Arizona-based Spektreworks, in combat against Iran. This means that Iran's own design is now being used against it.

[...] The US reportedly reverse-engineered the drone after capturing units from Iranian-backed militias in Iraq and Syria, and it was successfully test launched from a US Navy ship last year.

[...] “You’re knocking them out of the sky with ordnance that’s way more expensive not just than the Shahed, but sometimes it’s more expensive than the thing that the Shahed is actually hitting,” says King. “There have been loads of cases where the target the Shahed is hitting is cheaper than the Patriot missile [used to take it down]. The appearance of these kind of crude, but effective, remote systems changes the economic calculus of war in an interesting way.”

[...] Ian Muirhead at the University of Manchester, UK, who previously spent 23 years in the military, says that Shahed drones will never replace crewed aircraft or highly advanced missiles, but that they are increasingly finding a place in combat and that western militaries are learning lessons from the war in Ukraine and adopting similar weapons.

“A lot of modern weapons are extremely complex and expensive, and if you’re having large-scale conflicts like this, having lots of cheap, expendable weapons – particularly if you don’t have big armies any more – is more effective,” says Muirhead. “If you can send a thousand of them, you can overwhelm defences with cheap munitions.”

“It’s just economics: if it costs you 10 times more for your defence than it is for your attackers, you’re never going to be able to outpace the other side,” says Muirhead.
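The cost asymmetry King and Muirhead describe can be sketched with back-of-the-envelope arithmetic. The ~$50,000 Shahed unit cost is the article's low-end estimate; the interceptor cost below is an assumed rough figure for an expensive air-defence missile, not a number from the source:

```python
# Illustrative cost-exchange arithmetic for cheap-drone attrition warfare.
SHAHED_COST = 50_000          # USD per drone (article's low-end estimate)
INTERCEPTOR_COST = 4_000_000  # USD per defensive missile (assumed figure)

def exchange_ratio(attack_cost: float, defence_cost: float) -> float:
    """Dollars the defender spends per attacker dollar, one-for-one."""
    return defence_cost / attack_cost

ratio = exchange_ratio(SHAHED_COST, INTERCEPTOR_COST)
print(f"Defender spends ~{ratio:.0f}x the attacker's cost per intercept")

# A mass wave multiplies the asymmetry:
drones = 1_000
print(f"Attack wave cost:  ${drones * SHAHED_COST:,}")
print(f"Defence cost (one interceptor each): ${drones * INTERCEPTOR_COST:,}")
```

Under these assumed numbers, every intercept costs the defender tens of times what the attacker spent, which is Muirhead's point about never outpacing the other side.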


Original Submission

posted by jelizondo on Thursday March 12, @04:23AM   Printer-friendly

https://arstechnica.com/ai/2026/03/are-consumers-doomed-to-pay-more-for-electricity-due-to-data-center-buildouts/

Big Tech is set to agree to build its own power plants for data centers and shield consumers from rising electricity costs, but companies face daunting logistical obstacles to delivering on the pledge championed by President Donald Trump.

At a White House event on Wednesday, executives from Amazon, Google, Meta, Microsoft, xAI, Oracle, and OpenAI are due to sign the pledge to supply their own power instead of relying on a grid connection.

Trump hailed the plan in his State of the Union speech last week, promising US consumers that "no one's prices will go up" as a result of "energy demand from AI data centers."

But industry executives have suggested the commitment will not be binding, while experts warn it is likely impossible to fully insulate consumers from the extra power demand coming from the vast expansion of data centers to run AI.

"Regardless of how these data centers connect, behind the meter or as part of the network, you're going to increase demand," said Ari Peskoe, director at Harvard Law School's Electricity Law Initiative.

Independent power supplies for data centers most often come from gas turbines, which are in short supply and not always designed to provide continuous power. "We still need more of these turbines," Peskoe added.

Trump's pressure on big data center operators comes in response to consumer backlash and political pressure over rising power bills.

On the campaign trail in 2024, Trump pledged to cut energy bills in half within a year of taking office.

In reality, residential electricity costs rose by 6 percent nationwide in February, compared with a year before, according to the US Energy Information Administration.

States such as New Jersey and Pennsylvania, which have clusters of data centers, reported bigger increases at 16 percent and 19 percent respectively.

Natural gas prices, extreme weather, and the need to upgrade aging grid infrastructure have all contributed to higher costs—after decades of low investment in power plants and transmission lines. The hit to energy supplies from Trump's war against Iran could add to the problem.

Critics of data centers say they are increasing energy bills by adding to demand. US data center power demand will more than triple by 2035, rising from almost 35 gigawatts in 2024 to 106 GW, according to data from BloombergNEF.
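The BloombergNEF endpoints quoted above (roughly 35 GW in 2024 to 106 GW in 2035) imply a steep but checkable growth rate; the compound-growth framing below is our own, not from the article:

```python
# Implied compound annual growth rate from the article's endpoints.
start_gw, end_gw = 35.0, 106.0
years = 2035 - 2024

cagr = (end_gw / start_gw) ** (1 / years) - 1
multiple = end_gw / start_gw

print(f"Implied compound annual growth: {cagr:.1%}")
print(f"Multiple over the period: {multiple:.2f}x")
```

The trajectory works out to roughly 10 percent annual growth sustained for over a decade, which helps explain the supply-chain strain on turbines described below.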

To avoid political backlash and waits of up to four years for grid connections, tech companies are already building their own power supplies for many new data centers.

Nearly three-quarters of planned generation equipment for data centers is natural gas fired, according to energy research firm Cleanview, which is tracking 56 GW of projects across the US.

Wednesday's pledge would see tech companies expand these efforts to prevent higher power costs from being pushed onto customer bills.

Josh Price, director of energy and utilities at strategy firm Capstone, said Big Tech was "trying to push back against the narrative that they're the bad guy."

But the boom in data center building is already pushing the limits of the supply chain for power generation, making it difficult for companies to meet their commitment to Trump.

Competition for gas turbines is fierce, with waits as long as seven years for new orders.

Turbine-maker GE Vernova said it would expand production by 25 percent, and Mitsubishi Power announced plans to double its output over the next two years. But manufacturers have been cautious about expanding capacity, and it may not be enough to meet booming demand.

Two-thirds of gas projects in development in the US have not announced a turbine manufacturer, according to Global Energy Monitor.

The price of gas turbines has risen sharply, and greater competition from tech companies will mean higher costs for utilities and industrial customers who also need generating capacity—costs that could still be passed on to ratepayers.

To overcome shortages, data centers are increasingly relying on alternatives. Companies, including Google and Microsoft, have also struck deals to reopen nuclear power plants, but these plans will take years to deliver.

In the near term, companies are using options such as reciprocating engines and diesel generators. Experts point out that these power sources, as well as ordinary gas turbines, are not designed to provide the kind of continuous power needed by data centers.

"They say, 'we have documented evidence that these can run 90 percent of the time'... But that's not the average use case," said Jigar Shah, an energy investor and former Department of Energy official.

Keeping these data centers, and their power supplies, operational for decades would also present challenges around securing spare parts and qualified technicians, he added.

Shah said: "The level of ineptitude by which the data center companies are sleepwalking into major problems just seems shocking for trillion-dollar companies."


Original Submission

posted by jelizondo on Wednesday March 11, @11:36PM   Printer-friendly

Two of ME-CENTRAL-1's three availability zones went offline after Iran targeted Amazon's cloud infrastructure:

AWS confirmed on its health dashboard that two facilities in the UAE were "directly struck" and that a third site in Bahrain sustained damage from a nearby explosion. The strikes caused structural damage, disrupted power delivery, and, in some cases, triggered fire suppression systems that produced additional water damage, according to the AWS Health Dashboard. Amazon told customers it expects recovery to be prolonged "given the nature of the physical damage involved".

These outages then cascaded into consumer-facing services across the Gulf. Ride-sharing and delivery platform Careem, payments firms Hubpay and Alaan, data management company Snowflake, and several major UAE banks — including Emirates NBD, First Abu Dhabi Bank, and Abu Dhabi Commercial Bank — all reported disruptions. AWS advised customers to activate disaster recovery plans and migrate workloads away from the affected Middle East regions.

Iran's Islamic Revolutionary Guard Corps stated it targeted the Bahrain facility specifically because AWS hosts U.S. military workloads there; AWS declined to comment on that claim. Sean Gorman, Air Force contractor and CEO of Zephr.xyz, told DefenseScoop on Tuesday that classified government workloads at Impact Level 4 and 5 are held in U.S.-only facilities, but acknowledged that "contractor and non-operational data… may have been impacted" at the struck sites.

The attacks followed joint U.S.-Israeli strikes on Iran over the last week. AWS urged customers with workloads in the region to migrate to unaffected regions while repairs continue.


Original Submission